
    Boosting as Frank-Wolfe

    Some boosting algorithms, such as LPBoost, ERLPBoost, and C-ERLPBoost, aim to solve the soft margin optimization problem with $\ell_1$-norm regularization. LPBoost rapidly converges to an $\epsilon$-approximate solution in practice, but it is known to take $\Omega(m)$ iterations in the worst case, where $m$ is the sample size. On the other hand, ERLPBoost and C-ERLPBoost are guaranteed to converge to an $\epsilon$-approximate solution in $O(\frac{1}{\epsilon^2} \ln \frac{m}{\nu})$ iterations, but their per-iteration cost is much higher than that of LPBoost. To address this issue, we propose a generic boosting scheme that combines the Frank-Wolfe algorithm with any secondary algorithm, switching between the two iteratively. We show that the scheme retains the same convergence guarantee as ERLPBoost and C-ERLPBoost, and any secondary algorithm can be incorporated to improve performance in practice. The scheme arises from a unified view of boosting algorithms for soft margin optimization: we show that LPBoost, ERLPBoost, and C-ERLPBoost are all instances of the Frank-Wolfe algorithm. In experiments on real datasets, one instance of our scheme exploits the better updates of the secondary algorithm and performs comparably with LPBoost.
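    The switching idea admits a compact sketch. The following Python snippet is a minimal illustration under assumed oracles, not the paper's exact interfaces: `grad`, `obj`, `lmo` (the linear minimization oracle, which in boosting amounts to calling the weak learner), and `secondary_step` are placeholders.

        import numpy as np

        def fw_boost_switch(grad, obj, lmo, secondary_step, w0, T=200, tol=1e-4):
            # Frank-Wolfe with an interleaved secondary update (illustrative sketch).
            w = w0
            for t in range(T):
                g = grad(w)
                s = lmo(g)              # vertex minimizing <g, s> over the feasible set
                gap = g @ (w - s)       # FW duality gap certifies eps-optimality
                if gap <= tol:
                    break
                w_fw = w + 2.0 / (t + 2) * (s - w)   # standard FW step
                w_sec = secondary_step(w)            # any secondary algorithm's update
                # Switch: keep whichever candidate attains the smaller objective.
                w = w_fw if obj(w_fw) <= obj(w_sec) else w_sec
            return w

    Because the accepted iterate is never worse than the Frank-Wolfe candidate, the usual $O(\frac{1}{\epsilon^2})$-type iteration bound survives no matter what the secondary step does, which appears to be the mechanism behind the guarantee claimed above.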

    Extended Formulations via Decision Diagrams

    We propose a general algorithm for constructing an extended formulation for any given set of linear constraints with integer coefficients. Our algorithm consists of two phases: first construct a decision diagram $(V,E)$ that somehow represents a given $m \times n$ constraint matrix, and then build an equivalent set of $|E|$ linear constraints over $n+|V|$ variables. That is, the size of the resulting extended formulation depends not explicitly on the number $m$ of original constraints, but on the size of the decision diagram representation. Therefore, we may significantly reduce the computation time for optimization problems with integer constraint matrices by solving them under the extended formulations, especially when we obtain concise decision diagram representations of the matrices. We apply our method to $1$-norm regularized hard margin optimization over the binary instance space $\{0,1\}^n$, which can be formulated as a linear programming problem with $m$ constraints having $\{-1,0,1\}$-valued coefficients over $n$ variables, where $m$ is the size of the given sample. Furthermore, by introducing slack variables over the edges of the decision diagram, we establish a variant formulation for soft margin optimization. We demonstrate the effectiveness of our extended formulations on integer programming and $1$-norm regularized soft margin optimization tasks over synthetic and real datasets.
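    To make the two-phase construction concrete, here is a minimal Python sketch of the standard flow-based way of turning a layered decision diagram into linear constraints. This is illustrative only: the edge encoding and names are assumptions, and the paper's own formulation ($|E|$ constraints over $n+|V|$ variables) differs from this textbook variant, which uses one flow variable per edge.

        from collections import defaultdict

        def dd_flow_constraints(edges, root, sink, n):
            # edges: list of (u, v, layer, label); traversing (u, v) fixes
            # x[layer] = label, with label in {0, 1}.  (Assumed encoding.)
            # Returns (coeffs, rhs) pairs meaning: sum(coeffs[var] * var) == rhs.
            constraints = []

            # Flow conservation: net outflow is 1 at the root, -1 at the sink,
            # and 0 at every internal node.
            balance = defaultdict(lambda: defaultdict(int))
            for i, (u, v, _, _) in enumerate(edges):
                balance[u][f"y{i}"] += 1      # edge i leaves u
                balance[v][f"y{i}"] -= 1      # edge i enters v
            for node, coeffs in balance.items():
                rhs = 1 if node == root else (-1 if node == sink else 0)
                constraints.append((dict(coeffs), rhs))

            # Coupling: x_j equals the total flow over 1-labelled edges on layer j,
            # so integral unit flows through the diagram recover feasible x.
            for j in range(n):
                coeffs = {f"y{i}": 1 for i, (_, _, layer, label) in enumerate(edges)
                          if layer == j and label == 1}
                coeffs[f"x{j}"] = -1
                constraints.append((coeffs, 0))

            return constraints

        # Two-variable toy diagram whose paths are exactly the x with x0 + x1 == 1:
        edges = [("r", "a", 0, 0), ("r", "b", 0, 1),
                 ("a", "t", 1, 1), ("b", "t", 1, 0)]
        for coeffs, rhs in dd_flow_constraints(edges, "r", "t", 2):
            print(coeffs, "==", rhs)

    An LP or IP over these constraints optimizes over root-to-sink paths of the diagram, so the formulation size tracks the diagram rather than the original $m$ constraints, which is the point of the abstract.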